Originally published November 28, 2025
Instructors across higher education are wrestling with the presence of gen AI tools in their courses and, most urgently, their unauthorized use in student assignments. Instructors have quickly become familiar with the potential signs of gen AI use, including: “hallucinated” citations and information; advanced, formal academic language coupled with distinctly shallow analysis (known as “workslop”); excessive use of bullet points; and information that does not align with the real source it is attributed to. Yet it remains hard to truly distinguish between AI-generated and human-generated output (Murray & Tersigni 2024).
While some call for adopting new AI detection software as a way to protect academic integrity – using one technology to catch out another – studies have shown that the findings of detectors are inaccurate, inconsistent, and thus unreliable (Berdahl 2024; Perkins et al. 2024a; Perkins et al. 2024b). For this reason, Laurier has neither adopted nor authorized any detection software for gen AI use in misconduct investigations. In fact, it is becoming clear that engaging in “the academic integrity arms race”, as Sarah Elaine Eaton (2022; 2023) calls it, is not the answer. It serves only to drag instructors into an endless loop of new technologies, student workarounds and new detection tools focused on cheating and surveillance.
Instead, instructors need to lean into what they do best – educating. Certainly, upholding academic integrity is a vital part of the instructor’s role, and Laurier has a clear academic misconduct process that attempts to balance reasonable suspicion of unauthorized gen AI use with opportunities for students to share additional evidence of knowledge and authorship. But alongside this process, we know that we don’t just ‘police’ integrity; we teach it. This means holding new conversations about the purpose of higher education, clarifying whether and how gen AI may be integrated into or prohibited in a course, and redesigning activities and assignments to respond to the evolving gen AI context (Perkins et al. 2025; Eaton et al. 2025).
In this week’s edition, we invite you to consider ways of further developing and deepening your approaches to teaching by foregrounding evaluative judgement. When embedded explicitly and transparently in course design and classroom activities, building students’ capacities for evaluative judgement in the age of gen AI can provide an opportunity to achieve goals for transformative learning, academic skill development, and academic integrity.
When instructors look at research or content, they can use their expertise to evaluate its accuracy, the complexity or depth of the analysis, variation and creativity, the language choices made, the credibility of the sources, and, ultimately, answer the question: what makes it good?
Education researchers call this skill evaluative judgement, the capability “to judge the quality of one’s own and others’ work” (Tai et al. 2018, p. 468). Teaching students evaluative judgement invites us to think about how we prepare students, as novices in our fields, to identify and evaluate the features of quality that experts can recognize and establish – whether the work in question is a research study, a novel, an artwork, a mainstream media article, a social media post or a gen AI output. Instructors can support students in developing their evaluative judgement of existing materials produced by others so that they can better assess the quality of their own work when it is their turn to create and to know their work is good enough to press ‘submit’ (Tai et al. 2018; Bearman et al. 2024).
Whether as part of a discussion, a learning activity, or an assignment itself, researchers suggest instructors can develop students’ evaluative judgement skills in the age of gen AI by providing:
Comparative reference materials for evaluation: Using credible sources to check for accuracy or validation, exemplars of both excellent and poor quality, and a range of perspectives or analytical frameworks.
Elements or criteria of success (both component and holistic) with opportunities for feedback: Drawing on self, peer, instructor, and potentially gen AI review.
Standards of quality: Using completion checklists or questions to enhance critical thinking about the credibility of a source or about its perspectives, biases, or gaps.
Information on the steps, practices, or strategies that lead to success, so students can evaluate what is appropriate or effective for completing a task or assessment well: For example, a research approach, experimental steps, the appropriateness of gen AI use for the task, or the effectiveness of gen AI prompting to support task completion.
Explore these examples from Laurier faculty about clarifying expectations around classroom gen AI use:
Lisa Kuron on a three-pronged approach to academic integrity and gen AI;
Tarah Brookfield on creating meaningful assessments that require students to draw unique connections;
Kathleen Clarke on strategies that continue to develop students’ writing skills.
Get inspired by ideas for redesigning assessments and activities, such as:
Danve Castroverde’s emphasis on scaffolded learning, detailed instructions, and transparent grading in assessments;
And Alanna Harman’s approach to designing assessments that encourage students to critique and think critically about gen AI outputs.
Did you know Laurier’s Writing Services offers both in-class and institution-wide Gen AI and Writing workshops?
In these workshops, Writing Consultants invite students to develop their evaluative judgement of gen AI in their writing using McLuhan’s “tetrad” (1988) for analyzing the impact of a medium or technology on society.
Connect with Writing Services to book a workshop, or share their Gen AI and Writing workshop schedule with your students to help them build gen AI literacy skills.